Infeasibility-free inverse kinematics method
The problem of inverse kinematics is revisited in this paper, focusing on solving the inverse kinematics problem while respecting velocity limits on both the robot's joints and the end-effector. Even though conventional inverse kinematics algorithms have proven efficient in many applications, defining an admissible trajectory for the end-effector is still a burdensome task for the user, and the problem can easily become unsolvable. The main idea behind the proposed algorithms is to treat the sampling time as a free variable, adding flexibility to the optimization problem associated with the inverse kinematics. We prove that the reformulated problem always has a solution whenever the end-effector path lies in the reachable space of the robot, thus resolving the infeasibility of conventional inverse kinematics methods. To validate the proposed approach, we conducted three simulation scenarios. The simulation results show that while conventional inverse kinematics methods fail to track a desired end-effector trajectory precisely, the proposed algorithms always succeed.
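The core idea — stretching the sampling time rather than declaring a step infeasible — can be sketched as follows. This is a minimal illustration of the concept under assumed names (`ik_step_free_dt`, a generic pseudoinverse IK step), not the paper's actual algorithm:

```python
import numpy as np

def ik_step_free_dt(J, dq_limits, dx, dt_nominal):
    """One Jacobian-based IK step with the sampling time as a free variable.

    Solves J dq = dx for the joint displacement in least squares, then
    stretches the time step just enough that dq / dt respects the joint
    velocity limits, instead of rejecting the step as infeasible.
    """
    dq = np.linalg.pinv(J) @ dx                       # joint displacement
    # smallest dt keeping every joint within its velocity limit
    dt = max(dt_nominal, np.max(np.abs(dq) / dq_limits))
    return dq, dt
```

With this reformulation a solution always exists for reachable end-effector displacements: the step is never rejected, it is only slowed down.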
Rapid Pose Label Generation through Sparse Representation of Unknown Objects
Deep Convolutional Neural Networks (CNNs) have been successfully deployed on
robots for 6-DoF object pose estimation through visual perception. However,
obtaining labeled data on a scale required for the supervised training of CNNs
is a difficult task - exacerbated if the object is novel and a 3D model is
unavailable. To this end, this work presents an approach for rapidly generating
real-world, pose-annotated RGB-D data for unknown objects. Our method not only
circumvents the need for a prior 3D object model (textured or otherwise) but
also bypasses complicated setups of fiducial markers, turntables, and sensors.
With the help of a human user, we first source minimalistic labelings of an
ordered set of arbitrarily chosen keypoints over a set of RGB-D videos. Then,
by solving an optimization problem, we combine these labels under a world frame
to recover a sparse, keypoint-based representation of the object. The sparse
representation leads to the development of a dense model and the pose labels
for each image frame in the set of scenes. We show that the sparse model can
also be efficiently used for scaling to a large number of new scenes. We
demonstrate the practicality of the generated labeled dataset by training a
pipeline for 6-DoF object pose estimation and a pixel-wise segmentation
network.
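The pose-label recovery step — fitting a rigid transform between a sparse keypoint model and its observations in a frame — can be illustrated with a standard Kabsch/Umeyama least-squares fit. The function name and setup are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def pose_from_keypoints(model_pts, observed_pts):
    """Rigid transform (R, t) aligning model keypoints to observed ones.

    Kabsch least-squares fit: center both point sets, take the SVD of the
    cross-covariance, and correct the sign so R is a proper rotation.
    model_pts, observed_pts: (N, 3) arrays in corresponding order.
    """
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

Given such per-frame poses, each image in a scene inherits a pose label from the shared sparse model.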
TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion Odometry Estimation
Multi-modal fusion of sensors is a commonly used approach to enhance the
performance of odometry estimation, which is also a fundamental module for
mobile robots. However, how to perform fusion among different modalities in a
supervised sensor-fusion odometry estimation task remains a challenging issue.
Simple operations, such as element-wise summation and concatenation, cannot
assign adaptive attentional weights to incorporate different modalities
efficiently, which makes it difficult to achieve competitive odometry results.
Recently, the Transformer
architecture has shown potential for multi-modal fusion tasks, particularly in
the domains of vision with language. In this work, we propose an end-to-end
supervised Transformer-based LiDAR-Inertial fusion framework (namely
TransFusionOdom) for odometry estimation. The multi-attention fusion module
demonstrates different fusion approaches for homogeneous and heterogeneous
modalities to address the overfitting problem that can arise from blindly
increasing the complexity of the model. Additionally, to interpret the learning
process of the Transformer-based multi-modal interactions, a general
visualization approach is introduced to illustrate the interactions between
modalities. Moreover, exhaustive ablation studies evaluate different
multi-modal fusion strategies to verify the performance of the proposed fusion
strategy. A synthetic multi-modal dataset is made public to validate the
generalization ability of the proposed fusion strategy, which also works for
other combinations of different modalities. The quantitative and qualitative
odometry evaluations on the KITTI dataset verify the proposed TransFusionOdom
could achieve superior performance compared with other related works.
Comment: Submitted to IEEE Sensors Journal.
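Why attention yields adaptive, input-dependent fusion weights — unlike fixed summation or concatenation — can be seen in a toy single-head cross-attention in plain NumPy. Learned projections and the multi-head structure are omitted, and nothing here reproduces the paper's actual fusion module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(lidar_feat, imu_feat):
    """Toy cross-attention: LiDAR tokens (queries) attend to IMU tokens (keys).

    The attention weights depend on the inputs themselves, so each LiDAR
    token draws adaptively on the IMU features -- the property that
    element-wise summation and concatenation lack.
    lidar_feat: (Nq, d), imu_feat: (Nk, d); returns fused (Nq, d).
    """
    d = lidar_feat.shape[-1]
    scores = lidar_feat @ imu_feat.T / np.sqrt(d)   # (Nq, Nk) similarities
    weights = softmax(scores, axis=-1)              # adaptive fusion weights
    return weights @ imu_feat                       # weighted IMU features
```

Visualizing `weights` as a heatmap is one simple way to inspect cross-modal interactions, in the spirit of the interpretability analysis the abstract mentions.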
Fast grasp planning for hand/arm systems based on convex model
Abstract—This paper discusses the grasp planning of a multifingered hand attached at the tip of a robotic arm. By using convex models and a new approximation method for the friction cone, our proposed algorithm can calculate the grasping motion within a reasonable time. For each grasping style used in this research, we define the grasping rectangular convex (GRC). We also define the object convex polygon (OCP) for the grasped object. By considering the geometrical relationship among these convex models, we determine several parameters needed to define the final grasping configuration. To determine the contact point positions satisfying force closure, we use two approximation models of the friction cone. To save calculation time, a rough ellipsoid approximation is mainly used to check force closure; an approximation by a convex polyhedral cone is then used at the final stage of the planning. The effectiveness of the proposed method is confirmed by several numerical examples.
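The convex polyhedral cone approximation used at the final stage can be sketched by generating the cone's edge vectors for a given friction coefficient; parameters and naming are illustrative assumptions, not the paper's code:

```python
import numpy as np

def friction_cone_edges(normal_force=1.0, mu=0.5, m=8):
    """Edge vectors of an m-sided convex polyhedral cone approximating the
    Coulomb friction cone (contact normal along +z).

    Any admissible contact force is then modeled as a nonnegative
    combination of these edges. A larger m tightens the approximation at
    higher cost -- matching a coarse-check-then-refine planning strategy.
    Returns an (m, 3) array of edge vectors on the cone surface.
    """
    angles = 2 * np.pi * np.arange(m) / m
    return np.stack([mu * np.cos(angles),        # tangential x
                     mu * np.sin(angles),        # tangential y
                     np.ones(m)], axis=1) * normal_force
```

Each edge has tangential magnitude exactly `mu` times its normal component, i.e., it lies on the boundary of the exact friction cone.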
Integrating geometric constraints into reactive leg motion generation
Abstract—This paper proposes a reactive leg motion generation method which integrates geometric constraints into its generation process. In order to react to given instructions instantaneously or to keep balance against external disturbances, feasible steps must be generated automatically in real time for safety. In many cases this feasibility has been realized by using predefined steps or admissible stepping regions. However, these predefinitions are often too conservative or valid only in limited situations. The proposed method considers geometric constraints in addition to joint limits during its generation process, and it can utilize the ability of the robot to the maximum extent. It can generate a feasible walking pattern in real time by modifying the swing leg motion and the next landing position at each control cycle. The proposed method is validated by experiments using the humanoid robot HRP-2.
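One simple way to picture per-cycle feasibility enforcement is projecting a desired landing position onto a feasible region. The rectangular reachable area and minimum lateral separation below are illustrative assumptions, not HRP-2's actual constraint set:

```python
import numpy as np

def clip_landing(target, x_range, y_range, min_sep):
    """Project a desired swing-foot landing position (in the stance-foot
    frame) onto a feasible region: a rectangular reachable area plus a
    minimum lateral separation from the stance foot to avoid
    self-collision. Illustrative stand-in for geometric constraints
    checked at each control cycle.
    """
    x = np.clip(target[0], *x_range)
    y = np.clip(target[1], *y_range)
    if abs(y) < min_sep:                 # too close to the stance foot
        y = np.sign(y or 1) * min_sep    # push out to the separation bound
    return np.array([x, y])
```

Running such a projection every control cycle keeps the commanded step feasible even when instructions or disturbances change mid-swing.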